Fair and verifiable multi-keyword ranked search over encrypted data based on blockchain
PANG Xiaoqiong, WANG Yunting, CHEN Wenjun, JIANG Pan, GAO Yanan
Journal of Computer Applications    2023, 43 (1): 130-139.   DOI: 10.11772/j.issn.1001-9081.2021111904
Existing blockchain-based searchable encryption schemes that realize result verification and fair payment suffer from high cost and limited retrieval functionality. To address this, a multi-keyword ranked search scheme supporting verification and fair payment was proposed based on blockchain. In the proposed scheme, the Cloud Service Provider (CSP) was used to store the encrypted index tree and perform search operations, and a lookup table including verification certificates was constructed to help the smart contract complete the verification of retrieval results and fair payment, which reduced the complexity of smart contract execution and saved both time and monetary cost. In addition, an index with a balanced binary tree structure was constructed by combining the vector space model and Term Frequency-Inverse Document Frequency (TF-IDF), and the index and query vectors were encrypted using secure K-nearest neighbor, realizing multi-keyword ranked search with support for dynamic update. Security and performance analysis shows that the proposed scheme is secure and feasible in the blockchain environment and under the known ciphertext model. Simulation results show that the proposed scheme achieves result verification and fair payment at acceptable cost.
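The ranking core of such schemes scores documents by the inner product between a binary query vector and TF-IDF document vectors (the secure K-nearest-neighbor step then encrypts both sides). A minimal plaintext sketch of that ranking; the function names, the IDF smoothing, and the toy data are illustrative assumptions, not taken from the paper:

```python
import math

def tfidf_vectors(docs, vocab):
    """TF-IDF weight for each document (a token list) over a fixed keyword vocabulary."""
    n = len(docs)
    df = {w: sum(1 for d in docs if w in d) for w in vocab}  # document frequency
    vecs = []
    for d in docs:
        vec = []
        for w in vocab:
            tf = d.count(w) / len(d)
            idf = math.log((1 + n) / (1 + df[w])) + 1  # smoothed IDF (assumed variant)
            vec.append(tf * idf)
        vecs.append(vec)
    return vecs

def ranked_search(query_words, docs, vocab, top_k=2):
    """Rank documents by inner product between the query vector and TF-IDF vectors."""
    q = [1.0 if w in query_words else 0.0 for w in vocab]
    vecs = tfidf_vectors(docs, vocab)
    scores = [sum(a * b for a, b in zip(q, v)) for v in vecs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])[:top_k]
```

In the actual scheme this computation runs over ciphertexts, so the CSP learns neither the index nor the query contents.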
Cattle eye image feature extraction method based on improved DenseNet
ZHENG Zhiqiang, HU Xin, WENG Zhi, WANG Yuhe, CHENG Xi
Journal of Computer Applications    2021, 41 (9): 2780-2784.   DOI: 10.11772/j.issn.1001-9081.2020101533
To address the low recognition accuracy caused by vanishing gradients and overfitting during cattle eye image feature extraction, an improved DenseNet-based feature extraction method was proposed. Firstly, the Scaled exponential Linear Unit (SeLU) activation function was used to prevent vanishing gradients in the network. Secondly, feature blocks of cattle eye images were randomly discarded by DropBlock to prevent overfitting and strengthen the generalization ability of the network. Finally, the improved dense layers were stacked to form an improved Dense convolutional Network (DenseNet). Feature extraction and recognition experiments were conducted on a self-built cattle eye image dataset. Experimental results show that the recognition accuracy, precision and recall of the improved DenseNet are 97.47%, 98.11% and 97.90% respectively, improvements of 2.52, 3.32 and 2.94 percentage points over the unimproved network. The improved network therefore has higher precision and robustness.
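The SeLU activation the abstract relies on has a closed form that keeps activations self-normalizing, which is what counters the vanishing gradient. A sketch with the standard published constants (the paper's exact variant may differ):

```python
import math

# Standard SELU constants from Klambauer et al.; assumed here, the paper may tune them.
SELU_LAMBDA = 1.0507009873554805
SELU_ALPHA = 1.6732632423543772

def selu(x):
    """Scaled exponential linear unit: lambda*x for x > 0,
    lambda*alpha*(e^x - 1) otherwise, so negative inputs saturate smoothly."""
    if x > 0:
        return SELU_LAMBDA * x
    return SELU_LAMBDA * SELU_ALPHA * (math.exp(x) - 1.0)
```

Unlike ReLU, the negative branch keeps a nonzero gradient, and the lambda > 1 scaling pushes layer activations toward zero mean and unit variance.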
Analysis and improvement of panic concept in social force model
DING Nanzhe, LIU Tingting, LIU Zhen, WANG Yuanyi, CHAI Yanjie, JIANG Lan
Journal of Computer Applications    2021, 41 (8): 2460-2465.   DOI: 10.11772/j.issn.1001-9081.2020101550
The social force model is a classical model in crowd simulation. Since it was proposed in 1995, the model has been widely used and modified; in 2000, an improved version added the concept of panic degree. Although many studies focus on the social force model, few examine this concept. Therefore, some key parameters and the concept of panic degree in the social force model were analyzed, and the change of panic degree was used to explain the phenomena of "fast is slow" and "herd behavior" in crowd evacuation. The original model describes pedestrian perception too coarsely, so under some conditions a few pedestrians neither follow others nor evacuate through the exit. To overcome this, the model was optimized by adding a visual field range for each pedestrian and redefining the pedestrian's self-motion state, among other methods. Experimental results show that the improved model simulates the crowd's herd phenomenon well and is helpful for understanding the concept of panic degree in the social force model.
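In the 2000 panic formulation referenced above, the panic degree p blends a pedestrian's own preferred direction with the mean direction of neighbors: p = 0 is fully individual motion, p = 1 is pure herding. A minimal sketch of that blending rule (function names are illustrative):

```python
import math

def normalize(v):
    """Unit vector, or the zero vector if v has no length."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n) if n > 0 else (0.0, 0.0)

def desired_direction(own_dir, neighbor_dirs, panic):
    """Blend a pedestrian's own preferred direction with the mean neighbor
    direction; panic in [0, 1] is the herding weight of the 2000 model."""
    mean = normalize((sum(d[0] for d in neighbor_dirs),
                      sum(d[1] for d in neighbor_dirs)))
    blended = ((1 - panic) * own_dir[0] + panic * mean[0],
               (1 - panic) * own_dir[1] + panic * mean[1])
    return normalize(blended)
```

The paper's improvement restricts `neighbor_dirs` to pedestrians inside the agent's visual field rather than all nearby agents.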
Intrusion detection based on improved triplet network and K-nearest neighbor algorithm
WANG Yue, JIANG Yiming, LAN Julong
Journal of Computer Applications    2021, 41 (7): 1996-2002.   DOI: 10.11772/j.issn.1001-9081.2020081217
Intrusion detection is one of the important means to ensure network security. To address the difficulty of balancing detection accuracy and computational efficiency in network intrusion detection, a detection model combining an improved Triplet Network (imTN) and K-Nearest Neighbor (KNN), named imTN-KNN, was proposed based on the idea of deep metric learning. Firstly, a triplet network structure suited to intrusion detection was designed to obtain distance features more conducive to subsequent classification. Secondly, because the Batch Normalization (BN) layer in the traditional model caused overfitting that degraded detection precision, a Dropout layer and a Sigmoid activation layer were introduced to replace the BN layer, improving model performance. Finally, the loss function of the traditional triplet network model was replaced with the multi-similarity loss function. In addition, the distance feature output of the imTN was used as the input of the KNN algorithm for retraining. Comparison experiments on the benchmark dataset IDS2018 show that, compared with the Deep Neural Network based Intrusion Detection System (IDS-DNN) and a detection model based on Convolutional Neural Networks and Long Short-Term Memory (CNN-LSTM), imTN-KNN improves detection accuracy by 2.76% and 4.68% on Sub_DS3, and computational efficiency by 69.56% and 74.31% respectively.
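The starting point of such a model is the classic triplet loss, which the paper then swaps for a multi-similarity loss: an anchor sample is pulled closer to a positive (same class) than to a negative (different class) by at least a margin. A sketch of the baseline loss on raw embedding vectors (margin value and names assumed):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: zero once the positive is closer than the
    negative by at least `margin` in the learned distance space."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)
```

After training, the distances produced by the embedding network become the features the KNN stage classifies, which is the division of labor imTN-KNN exploits.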
Maximum common induced subgraph algorithm based on vertex conflict learning
WANG Yu, LIU Yanli, CHEN Shaowu
Journal of Computer Applications    2021, 41 (6): 1756-1760.   DOI: 10.11772/j.issn.1001-9081.2020091381
The traditional branching strategies for the Maximum Common induced Subgraph (MCS) problem rely on static properties of the graphs and lack learning from historical searches. To solve this, a branching strategy based on vertex conflict learning was proposed. Firstly, the reduction of the upper bound was used as the reward to a branch node for completing a matching action. Secondly, since an updated optimal solution is in fact the result of continuous inference by the branch nodes, appropriate rewards were given to all branch nodes on the complete search path to strengthen the positive effect of these vertices on the search. Finally, a value function of the matching action was designed, and the vertex with the maximum cumulative reward was selected as the new branch node. On the basis of the Maximum common induced subgraph Split (McSplit) algorithm, an improved algorithm named McSplit Reinforcement Learning and Routing (McSplitRLR) combining the new branching strategy was completed. Experimental results show that, on the same computer and under the same solution time limit, and excluding simple instances solved by all comparison algorithms within 10 seconds, McSplitRLR solves 109 and 33 more hard instances than the state-of-the-art McSplit and McSplit Solution-Biased Search (McSplitSBS) algorithms respectively, raising the solution rate by 5.6% and 1.6%.
Consensus of two-layer multi-agent systems subjected to cyber-attack
WANG Yunyan, HU Aihua
Journal of Computer Applications    2021, 41 (5): 1399-1405.   DOI: 10.11772/j.issn.1001-9081.2020081159
The consensus problem of two-layer multi-agent systems subjected to cyber-attacks was studied. For a two-layer system composed of a leaders' layer and a followers' layer, the following situation was considered: neighboring agents in the leaders' layer were cooperative, adjacent agents in the followers' layer were cooperative or competitive, and a restraining relationship existed between some corresponding agents in the two layers. The consensus problem among the nodes of the leaders' layer, the followers' layer and the two-layer system under cyber-attack was discussed. Based on Linear Matrix Inequality (LMI), Lyapunov stability theory and graph theory, sufficient criteria were given for consensus between nodes in the leaders' layer, bipartite consensus between nodes in the followers' layer, and node-to-node bipartite consensus between the two layers. Finally, numerical simulation examples were given in which consensus of the two-layer multi-agent systems under cyber-attack was achieved, verifying the validity of the proposed criteria.
PageRank-based talent mining algorithm based on Web of Science
LI Chong, WANG Yuchen, DU Weijing, HE Xiaotao, LIU Xuemin, ZHANG Shibo, LI Shuren
Journal of Computer Applications    2021, 41 (5): 1356-1360.   DOI: 10.11772/j.issn.1001-9081.2020081206
High-level papers are one of the hallmark achievements of outstanding scientific talent. Focusing on hot research disciplines in Web of Science (WOS), on the basis of constructing a Neo4j semantic network graph of academic papers and mining active scientific research communities, a PageRank-based talent mining algorithm was used to mine outstanding researchers within those communities. Firstly, existing talent mining algorithms were studied and analyzed in detail. Secondly, combined with the WOS data, the PageRank-based algorithm was optimized and implemented by adding factors such as publication time, a descending model of author order, the influence of surrounding author nodes on a given node, and the number of citations of each paper. Finally, experiments and verification were carried out on the paper data of communities in the hot discipline of computer science over the past five years. The results show that community-based mining is more targeted, quickly finding representative excellent and potential talents in various disciplines, and that the improved algorithm is more effective and objective.
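The underlying PageRank step is a power iteration over a weighted author graph; the paper's extra factors (publication-year decay, author order, citation counts) would all enter through the edge weights. A minimal sketch of the weighted iteration; the damping factor and dangling-node handling are standard assumptions, not details from the paper:

```python
def pagerank(graph, damping=0.85, iters=50):
    """Weighted PageRank by power iteration. `graph` maps node -> {neighbor: weight};
    edge weights are where year decay or author-order factors would plug in."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, out in graph.items():
            total = sum(out.values())
            if total == 0:
                # dangling node: spread its rank uniformly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
                continue
            for m, w in out.items():
                new[m] += damping * rank[n] * w / total
        rank = new
    return rank
```

Running it per research community, rather than on the whole co-author graph, is what makes the mining "more targeted" in the abstract's sense.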
Dense crowd counting model based on spatial dimensional recurrent perception network
FU Qianhui, LI Qingkui, FU Jingnan, WANG Yu
Journal of Computer Applications    2021, 41 (2): 544-549.   DOI: 10.11772/j.issn.1001-9081.2020050623
Considering the limitations of feature extraction from high-density crowd images with perspective distortion, a crowd counting model named LMCNN was proposed that combines a Global Feature Perception Network (GFPNet) and a Local Association Feature Perception Network (LAFPNet). GFPNet is the backbone of LMCNN; its output feature map was serialized and used as the input of LAFPNet. The ability of a Recurrent Neural Network (RNN) to sense local association features along the time-series dimension was then used to map a single spatial static feature to a feature space with local sequence association features, effectively reducing the impact of perspective distortion on crowd density estimation. To verify the effectiveness of the proposed model, experiments were conducted on the Shanghaitech Part A and UCF_CC_50 datasets. The results show that, compared to the Atrous Convolutions Spatial Pyramid Network (ACSPNet), the Mean Absolute Error (MAE) of LMCNN decreased by at least 18.7% and 20.3% respectively, and the Mean Square Error (MSE) decreased by at least 22.3% and 22.6% respectively. LMCNN focuses on the association between front and back features in the spatial dimension; by fully integrating spatial dimension features and sequence features within a single image, it reduces the counting error caused by perspective distortion and predicts the number of people in dense areas more accurately, thereby improving the regression accuracy of crowd density.
Auditable signature scheme for blockchain based on secure multi-party
WANG Yunye, CHENG Yage, JIA Zhijuan, FU Junjun, YANG Yanyan, HE Yuchu, MA Wei
Journal of Computer Applications    2020, 40 (9): 2639-2645.   DOI: 10.11772/j.issn.1001-9081.2020010096
Aiming at the credibility problem of participants, a secure multi-party auditable signature scheme for blockchain was proposed. In the proposed scheme, a trust vector with timestamps was introduced, and a trust matrix composed of multi-dimensional vector groups was constructed to regularly record the trustworthy behavior of participants, establishing a credible evaluation mechanism for them; the evaluation results were stored in the blockchain as a basis for verification. On the premise that the participants are trusted, a secure and trusted signature scheme was then constructed through secret sharing technology. Security analysis shows that the proposed scheme can effectively reduce the damage brought by malicious participants, detect the credibility of participants, and resist mobile attacks. Performance analysis shows that the proposed scheme has lower computational complexity and higher execution efficiency.
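The abstract does not say which secret sharing construction is used; Shamir's (t, n) threshold scheme is the standard choice, and a compact sketch shows the mechanics the signature scheme builds on (the prime and API are assumptions for illustration):

```python
import random

PRIME = 2 ** 61 - 1  # a Mersenne prime large enough for toy secrets

def make_shares(secret, threshold, n):
    """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial
    with the secret as its constant term at x = 1..n."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field; any t shares suffice."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Fewer than t shares reveal nothing about the secret, which is what lets the scheme tolerate some untrusted participants.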
Classification model for class imbalanced traffic data
LIU Dan, YAO Lishuang, WANG Yunfeng, PEI Zuofei
Journal of Computer Applications    2020, 40 (8): 2327-2333.   DOI: 10.11772/j.issn.1001-9081.2019122241
In network traffic classification, traditional models classify minority classes poorly and cannot be updated frequently and in time. To solve these problems, a network Traffic Classification Model based on Ensemble Learning (ELTCM) was proposed. First, to reduce the impact of class imbalance, feature metrics biased towards minority classes were defined according to the class distribution information, and weighted symmetric uncertainty and the Approximate Markov Blanket (AMB) were used to reduce the dimensionality of network traffic features. Then, early concept drift detection was introduced to enhance the model's ability to cope with changes in traffic features as the network changes, and incremental learning was used to make update training more flexible. Experimental results on real traffic datasets show that, compared with Internet Traffic Classification based on the C4.5 Decision Tree (DTITC) and the Classification Model for Concept Drift Detection based on Error Rate (ERCDD), ELTCM increases average overall accuracy by 1.13% and 0.26% respectively, and its classification performance on minority classes is higher than that of both models. ELTCM has high generalization ability and can effectively improve minority-class classification performance without sacrificing overall accuracy.
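Symmetric uncertainty, the quantity the feature selection step weights, normalizes mutual information by the two entropies so scores are comparable across features. A sketch of the plain (unweighted) version over discrete feature/label sequences; the paper's class-biased weighting is not reproduced here:

```python
from collections import Counter
import math

def entropy(xs):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))     # joint entropy via paired symbols
    mi = hx + hy - hxy                 # mutual information
    return 2 * mi / (hx + hy) if hx + hy > 0 else 0.0
```

SU = 1 means a feature determines the class label exactly; SU = 0 means it is independent of the label and can be discarded.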
User association mechanism based on evolutionary game
WANG Yueping, XU Tao
Journal of Computer Applications    2020, 40 (5): 1392-1396.   DOI: 10.11772/j.issn.1001-9081.2019112024

User association is the problem of a wireless terminal choosing one serving base station to access. It can be seen as the first step in wireless resource management; it has an important impact on network performance and plays a key role in load balancing, interference control, and improving spectrum and energy efficiency. For a multi-layer heterogeneous network comprising macro base stations and full-duplex small base stations, a separate multiple-access mechanism was considered, in which a terminal may access different base stations in the uplink and the downlink to improve performance. On this basis, the uplink/downlink-separated user association problem in the heterogeneous network was modeled as an evolutionary game, in which the users act as players competing for resources, the base station access choices are the strategies, and every user seeks to maximize its own utility by its choice of strategy. A low-complexity, self-organized user association algorithm was then designed based on the evolutionary game and reinforcement learning, in which each user adjusts its strategy according to the revenue of the current strategy choice and finally reaches an equilibrium state, realizing user fairness. Finally, extensive simulations were performed to verify the effectiveness of the proposed method.
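Evolutionary game dynamics of this kind are usually written as replicator dynamics: the share of users on a base station grows when its payoff beats the population average. A one-step discrete sketch (the step size and payoff model are assumptions; the paper's exact update rule is not specified in the abstract):

```python
def replicator_step(shares, payoffs, step=0.1):
    """One discrete replicator-dynamics update over strategy shares.
    Strategies whose payoff exceeds the population average gain share."""
    avg = sum(s * p for s, p in zip(shares, payoffs))
    new = [s + step * s * (p - avg) for s, p in zip(shares, payoffs)]
    total = sum(new)
    return [s / total for s in new]  # renormalize to a probability vector
```

Iterating this until shares stop changing gives the evolutionary equilibrium the self-organized algorithm converges to; in the full scheme, payoffs would themselves depend on the current load of each base station.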

Region division method of brain slice image based on deep learning
WANG Songwei, ZHAO Qiuyang, WANG Yuhang, RAO Xiaoping
Journal of Computer Applications    2020, 40 (4): 1202-1208.   DOI: 10.11772/j.issn.1001-9081.2019091521
Aiming at the poor accuracy of automatic region division of mouse brain slice images using traditional multimodal registration methods, an unsupervised multimodal region division method for brain slice images was proposed. Firstly, based on the mouse brain map, the Atlas brain map and the Average Template brain map corresponding to the brain slice region division were obtained from the Allen Reference Atlases (ARA) database. Then the Average Template brain map and the mouse brain slices were pre-registered and modally transformed by affine transformation preprocessing and Principal Component Analysis Net-based Structural Representation (PCANet-SR) network processing. After that, according to U-net and the spatial transformation network, unsupervised registration was realized, and the registration deformation relationship was applied to the Atlas brain map. Finally, the edge contour of the Atlas brain map extracted by the registration deformation was merged with the original mouse brain slices to realize the region division of the brain slice image. Compared with the existing PCANet-SR + B-spline registration method, experimental results show that the Root Mean Square Error (RMSE) of the registration accuracy index of this method was reduced by 1.6%, while the Correlation Coefficient (CC) and the Mutual Information (MI) increased by 3.5% and 0.78% respectively. The proposed method can quickly realize the unsupervised multimodal registration task of the brain slice image and divide the brain slice regions accurately.
MP-CGAN: night single image dehazing algorithm based on Msmall-Patch training
WANG Yunfei, WANG Yuanyu
Journal of Computer Applications    2020, 40 (3): 865-871.   DOI: 10.11772/j.issn.1001-9081.2019071219
Aiming at the color distortion and noise in night image dehazing based on Dark Channel Prior (DCP) and atmospheric-scattering-model methods, a Conditional Generative Adversarial Network (CGAN) dehazing algorithm based on Msmall-Patch training (MP-CGAN) was proposed. Firstly, UNet and the Densely connected convolutional Network (DenseNet) were combined into a UDNet (U Densely connected convolutional Network) as the generator network structure. Secondly, Msmall-Patch training was performed on the generator and discriminator networks: multiple small penalty regions, which were degraded or easily misjudged, were extracted from the discriminator's final Patch using the Min-Pool or Max-Pool method, and a severe penalty loss was proposed for these regions by selecting multiple maximum loss values in the discriminator output as the loss. Finally, a new composite loss function was proposed by combining the severe penalty loss, the perceptual loss and the adversarial perceptual loss. On the test set, compared with the Haze Density Prediction Network algorithm (HDP-Net), the proposed algorithm increases PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural SIMilarity index) by 59% and 37% respectively; compared with the super-pixel algorithm, it increases PSNR and SSIM by 59% and 48% respectively. The experimental results show that the proposed algorithm reduces the noise artifacts generated during CGAN training and improves night image dehazing quality.
Face hallucination algorithm via combined learning
XU Ruobo, LU Tao, WANG Yu, ZHANG Yanduo
Journal of Computer Applications    2020, 40 (3): 710-716.   DOI: 10.11772/j.issn.1001-9081.2019071178
Most existing deep learning based face hallucination algorithms reconstruct the high-resolution output with a single network and do not consider the structural information in face images, so the reconstruction of vital facial organs lacks sufficient detail. Therefore, a face hallucination algorithm based on combined learning was proposed. In the algorithm, regions of interest were reconstructed independently by exploiting the advantages of different deep learning models, so the data distribution of each face region differed during network training and different sub-networks obtained more accurate prior information. Firstly, the superpixel segmentation algorithm was used to split a face image into facial component patches and a facial background image. Secondly, the facial component patches were independently reconstructed by a Component-Generative Adversarial Network (C-GAN), and a facial background reconstruction network generated the facial background image. Thirdly, a facial component fusion network adaptively fused the facial component patches reconstructed by the two different models. Finally, the fused facial component patches were merged into the facial background image to reconstruct the final face image. Experimental results on the FEI dataset show that the Peak Signal-to-Noise Ratio (PSNR) of the proposed algorithm is 1.23 dB and 1.11 dB higher than that of the face hallucination algorithms Learning to hallucinate face images via Component Generation and Enhancement (LCGE) and the Enhanced Discriminative Generative Adversarial Network (EDGAN) respectively. The proposed algorithm combines the advantages of different deep learning models to reconstruct more accurate face images and expands the sources of prior information for image reconstruction.
Hot new word discovery applied for detection of network hot news
WANG Yu, XU Jianmin
Journal of Computer Applications    2020, 40 (12): 3513-3519.   DOI: 10.11772/j.issn.1001-9081.2020040549
By analyzing the characteristics of hot words in network news, a hot new word discovery method was proposed for the detection of network hot news. Firstly, the Frequent Pattern tree (FP-tree) algorithm was improved to extract frequent word strings as hot new word candidates. Much useless information in the news data was removed by deleting infrequent 1-word strings and cutting the news data at infrequent 1- and 2-word strings, greatly decreasing the complexity of the FP-tree. Secondly, a multivariant Pointwise Mutual Information (PMI) was formed by extending the binary PMI, and Time PMI (TPMI) was formed by introducing the time features of hot words. TPMI was used to judge the internal cohesion and timeliness of the hot new word candidates and remove unqualified ones. Finally, branch entropy was used to determine the boundaries of new words when selecting hot new words. A dataset of 7 222 news headlines collected from Baidu network news was used for the experiments. When events reported at least 8 times in half a month were selected as hot news and the adjustment coefficient of the time feature was set to 2, TPMI correctly recognized 51 hot words, missing 2 hot words that stayed hot for a long time and 2 less-hot words that occurred insufficiently often; the multivariant PMI without time features correctly recognized all 55 hot words but incorrectly recognized 97 non-hot words. Analysis shows that time and space costs are reduced by decreasing the complexity of the FP-tree, and experimental results show that introducing the time feature improves the recognition rate of hot new words.
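The two statistics at the heart of the method are binary PMI (internal cohesion of a candidate string) and branch entropy (whether its right boundary is a real word boundary). A minimal sketch of both; the counting interface is an illustrative assumption, and the time weighting of TPMI is omitted:

```python
from collections import Counter
import math

def pmi(bigram_count, w1_count, w2_count, total):
    """Binary pointwise mutual information: log2 p(w1 w2) / (p(w1) p(w2)).
    High values mean the two words co-occur far more than chance."""
    return math.log2((bigram_count / total) /
                     ((w1_count / total) * (w2_count / total)))

def branch_entropy(candidate, corpus):
    """Entropy of the characters that follow `candidate` in the corpus.
    Varied successors (high entropy) suggest a genuine right word boundary."""
    followers = []
    start = corpus.find(candidate)
    while start != -1:
        nxt = start + len(candidate)
        if nxt < len(corpus):
            followers.append(corpus[nxt])
        start = corpus.find(candidate, start + 1)
    n = len(followers)
    if n == 0:
        return 0.0
    return -sum(c / n * math.log2(c / n) for c in Counter(followers).values())
```

TPMI additionally multiplies the PMI term by a time-feature factor so recently bursty candidates outrank long-running ones.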
Human action recognition model based on tightly coupled spatiotemporal two-stream convolution neural network
LI Qian, YANG Wenzhu, CHEN Xiangyang, YUAN Tongtong, WANG Yuxia
Journal of Computer Applications    2020, 40 (11): 3178-3183.   DOI: 10.11772/j.issn.1001-9081.2020030399
Considering the low utilization of action information and the insufficient attention to temporal information in video human action recognition, a human action recognition model based on a tightly coupled spatiotemporal two-stream convolutional neural network was proposed. Firstly, two 2D convolutional neural networks were used to separately extract the spatial and temporal features in the video. Then, the forget gate module of the Long Short-Term Memory (LSTM) network was used to establish feature-level tightly coupled connections between different sampled segments to transfer the information flow. After that, a Bi-directional Long Short-Term Memory (Bi-LSTM) network was used to evaluate the importance of each sampled segment and assign it an adaptive weight. Finally, the spatiotemporal two-stream features were combined to complete the human action recognition. The accuracy of this model on the UCF101 and HMDB51 datasets was 94.2% and 70.1% respectively. Experimental results show that the proposed model effectively improves the utilization of temporal information and the overall action representation ability, thus significantly improving the accuracy of human action recognition.
Crack detection for aircraft skin based on image analysis
XUE Qian, LUO Qijun, WANG Yue
Journal of Computer Applications    2019, 39 (7): 2116-2120.   DOI: 10.11772/j.issn.1001-9081.2019010092

To realize automatic crack detection for aircraft skin, skin image processing and parameter estimation methods were studied based on scanning images obtained by a pan-and-tilt long-focus camera. Firstly, considering the characteristics of aircraft skin images, light compensation, adaptive grayscale stretching and local OTSU segmentation were carried out to obtain binary crack images. Then, features such as the area and rectangularity of the connected domains were calculated to remove block noise from the images. After that, the cracks in the denoised binary images were thinned and deburred, and all crack branches were separated by deleting the crack nodes. Finally, using the branch pixels as indexes, information on each crack branch, such as its length, average width, maximum width, starting point, end point, midpoint, orientation and number of branches, was calculated by tracing pixels, and a report was output by the crack detection software. The experimental results demonstrate that cracks wider than 1 mm can be detected effectively by the proposed method, which provides a feasible means for the automatic detection of aircraft skin cracks on fuselages and wings.
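The OTSU segmentation step picks the gray-level threshold that maximizes between-class variance; the pipeline above applies it locally, per image block. A sketch of the global building block on a flat pixel list (a simplification of the local variant used in the paper):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: return the gray level t maximizing the between-class
    variance w0*w1*(mu0 - mu1)^2 of the split at t."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(levels):
        w0 += hist[t]                 # pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0               # pixels above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Running this per local window, as the paper does, keeps thin dark cracks separable even when illumination varies across the skin panel.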

Integration of cost-sensitive algorithms based on average distance of K-nearest neighbor samples
YANG Hao, WANG Yu, ZHANG Zhongyuan
Journal of Computer Applications    2019, 39 (7): 1883-1887.   DOI: 10.11772/j.issn.1001-9081.2018122483

To address the classification of imbalanced datasets and the fact that general cost-sensitive learning algorithms cannot be applied to multi-class problems, an ensemble of cost-sensitive algorithms based on the average distance of K-Nearest Neighbor (KNN) samples was proposed. Firstly, following the idea of maximizing the minimum margin, a resampling method that reduces the density of decision-boundary samples was proposed. Then, the average distance of each class of samples was used as the basis for judging classification results, and a learning algorithm based on Bayesian decision theory was proposed, making the improved algorithm cost-sensitive. Finally, the improved cost-sensitive algorithm was ensembled according to the K value, and the weight of each base learner was adjusted according to the minimum-cost principle, yielding a cost-sensitive AdaBoost algorithm aiming at the minimum total misclassification cost. The experimental results show that, compared with the traditional KNN algorithm, the improved algorithm reduces the average misclassification cost by 31.4 percentage points and has better cost sensitivity.
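One plausible reading of the per-class average-distance rule, sketched below: score each class by the mean distance of its k nearest training samples, then divide by that class's misclassification cost so that expensive-to-miss classes win ties. The scoring formula, names and cost model here are illustrative assumptions, not the paper's exact decision rule:

```python
import math

def euclid(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def min_cost_class(sample, train, k, costs):
    """Classify `sample` from `train` = [(vector, label), ...].
    Each class is scored by the average distance of its k nearest members,
    discounted by costs[c], the cost of misclassifying a true class-c sample;
    the class with the lowest discounted score is chosen."""
    classes = {lbl for _, lbl in train}
    risk = {}
    for c in classes:
        ds = sorted(euclid(sample, x) for x, lbl in train if lbl == c)[:k]
        risk[c] = (sum(ds) / len(ds)) / costs[c]
    return min(risk, key=risk.get)
```

Because the rule ranks classes rather than thresholding a binary score, it extends to the multi-class setting that standard cost-sensitive binary methods cannot handle.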

Detection of new ground buildings based on generative adversarial network
WANG Yulong, PU Jun, ZHAO Jianghua, LI Jianhui
Journal of Computer Applications    2019, 39 (5): 1518-1522.   DOI: 10.11772/j.issn.1001-9081.2018102083
Aiming at the inaccuracy of methods based on ground textures and spatial features in detecting new ground buildings, a novel Change Detection model based on Generative Adversarial Networks (CDGAN) was proposed. Firstly, a traditional image segmentation network (U-net) was improved with the Focal loss function and used as the Generator (G) of the model to generate segmentation results of remote sensing images. Then, a 16-layer convolutional neural network (VGG-net) was designed as the Discriminator (D) to discriminate between the generated results and the Ground Truth (GT) results. Finally, the Generator and Discriminator were trained adversarially to obtain a Generator with segmentation capability. The experimental results show that the detection accuracy of CDGAN reaches 92%, and the IU (Intersection over Union) value of the model is 3.7 percentage points higher than that of the traditional U-net model, proving that the proposed model effectively improves the detection accuracy of new ground buildings in remote sensing images.
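The Focal loss used to improve U-net down-weights easy pixels so the rare "new building" pixels dominate training. A per-pixel sketch of the standard binary form; the alpha/gamma defaults are common choices, not values from the paper:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.5):
    """Binary focal loss for one pixel: -alpha * (1 - p_t)^gamma * log(p_t),
    where p is the predicted foreground probability and y the 0/1 label.
    The (1 - p_t)^gamma factor shrinks the loss of well-classified pixels."""
    pt = p if y == 1 else 1.0 - p
    return -alpha * (1.0 - pt) ** gamma * math.log(pt)
```

With gamma = 0 and alpha = 1 this reduces to ordinary cross-entropy, which is why focal loss is described as its class-imbalance-aware generalization.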
Reference | Related Articles | Metrics
Reliable beacon-based and density-aware distance localization algorithm for wireless sensor network
QIAN Kaiguo, BU Chunfen, WANG Yujian, SHEN Shikai
Journal of Computer Applications    2019, 39 (3): 817-823.   DOI: 10.11772/j.issn.1001-9081.2018071661
Abstract410)      PDF (1083KB)(319)       Save
The traditional DV-Hop and Amorphous localization algorithms for Wireless Sensor Networks (WSN) cannot meet the accuracy requirements of practical applications because of beacon collinearity, range ambiguity, and the distance error caused by path deviation; in scenarios where nodes are heterogeneously distributed, these problems become even more serious. Therefore, a Reliable beacon-based and Density-aware distance Localization Algorithm for WSN (RDLA) was proposed to improve localization accuracy. Firstly, a hop-count threshold and a reliability function based on approximately equilateral triangles were employed to select beacon nodes with small error, avoiding the collinearity problem. Secondly, a node-density-aware hop-distance estimation method was used to resolve range ambiguity: distances were accumulated along the Shortest Hop Path (SHP) from an unknown node to three beacons, and the accumulated distance was corrected to a straight-line distance. Finally, the two-dimensional hyperbolic calculation method was adopted to determine the locations of unknown nodes and improve localization accuracy. Extensive simulations in Matlab R2012a show that the Average Localization Error (ALE) of RDLA is lower than that of the DV-Hop algorithm and its improved variants in networks with uniformly distributed nodes. Remarkably, RDLA achieves the lowest ALE in networks with non-uniformly distributed nodes and in C-shaped networks, where its ALE is kept below roughly 28%.
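The final position-solving step, computing a node location from beacon coordinates and estimated distances, is commonly done by linearizing the circle equations and solving in the least-squares sense. The sketch below shows that generic linearization; it illustrates the idea behind the two-dimensional hyperbolic calculation rather than reproducing the paper's exact formulation.

```python
import numpy as np

def multilaterate(beacons, dists):
    """Least-squares position estimate from beacon coordinates and estimated
    distances. Subtracting the first circle equation from the others removes
    the quadratic terms, leaving a linear system in (x, y)."""
    beacons = np.asarray(beacons, float)
    dists = np.asarray(dists, float)
    x1, y1 = beacons[0]
    # rows: [2(xi - x1), 2(yi - y1)] for i >= 2
    A = 2 * (beacons[1:] - beacons[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + (beacons[1:] ** 2).sum(axis=1) - x1 ** 2 - y1 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With exact distances the estimate is exact; with hop-estimated distances the least-squares solve spreads the error over all beacons.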
Reference | Related Articles | Metrics
Review of network background traffic classification and identification
ZOU Tengkuan, WANG Yuying, WU Chengrong
Journal of Computer Applications    2019, 39 (3): 802-811.   DOI: 10.11772/j.issn.1001-9081.2018071552
Abstract1354)      PDF (1686KB)(785)       Save
Internet traffic classification is the process of identifying network applications and classifying the corresponding traffic, and it is considered the most basic function of modern network management and security systems; application-related traffic classification is also a fundamental technology for network security. Traditional traffic classification methods include port-based prediction and payload-based deep packet inspection. In current network environments these traditional methods suffer from practical problems such as dynamic ports and encrypted applications. Therefore, Machine Learning (ML) techniques based on flow statistics are used to classify and identify traffic. Machine learning can automatically search the provided traffic data for useful structural patterns, which helps to classify traffic intelligently. Initially, the Naive Bayes method was used for network traffic classification; it performed well on specific flows with accuracy over 90%, but on traffic such as Peer-to-Peer (P2P) traffic its accuracy was only about 50%. Later, methods such as Support Vector Machine (SVM) and Neural Network (NN) were applied, and neural network methods could raise the overall classification accuracy to 80% or more. A number of studies show that using a variety of machine learning methods and their improvements can improve the accuracy of traffic classification.
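To make the Naive Bayes approach concrete, here is a minimal Gaussian Naive Bayes classifier over per-flow statistics. The two features used in the example (mean packet size, mean inter-arrival time) and the class names are illustrative choices, not taken from the surveyed studies.

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian Naive Bayes: each class models each flow feature
    as an independent Gaussian; prediction picks the class with the
    highest log prior plus log likelihood."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.stats, n = {}, len(X)
        for c, rows in groups.items():
            cols = list(zip(*rows))
            mean = [sum(col) / len(col) for col in cols]
            # small constant avoids zero variance on degenerate features
            var = [sum((v - m) ** 2 for v in col) / len(col) + 1e-9
                   for col, m in zip(cols, mean)]
            self.stats[c] = (math.log(len(rows) / n), mean, var)
        return self

    def predict(self, x):
        def loglik(c):
            prior, mean, var = self.stats[c]
            return prior + sum(
                -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                for xi, m, v in zip(x, mean, var))
        return max(self.stats, key=loglik)
```

The independence assumption is exactly what breaks down on P2P-like traffic whose feature distributions overlap several application classes.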
Reference | Related Articles | Metrics
Adaptive scale bilateral texture filtering method
WANG Hui, WANG Yue, LIU Changzu, ZHUANG Shanna, CAO Junjie
Journal of Computer Applications    2018, 38 (5): 1415-1419.   DOI: 10.11772/j.issn.1001-9081.2017102589
Abstract418)      PDF (901KB)(387)       Save
Almost all existing work on structure-preserving texture smoothing uses the statistical features of pixels within local rectangular patches to distinguish structures from textures. However, the patch size is single-scale, which may leave textures over-smoothed or unsmoothed in images with sharp structures or structures at different scales. Thus, an adaptive-scale bilateral texture filtering method was proposed. Firstly, the patch size for each pixel was chosen adaptively from a set of candidate sizes based on statistical analysis of local patches: larger patches for homogeneous texture regions and smaller ones for regions near structure edges. Secondly, a guided image was computed via the adaptive patch sizes. Finally, guided bilateral filtering was applied to the original image. The experimental results demonstrate that the proposed method better preserves image structures while smoothing textures.
Reference | Related Articles | Metrics
Long text classification combined with attention mechanism
LU Ling, YANG Wu, WANG Yuanlun, LEI Zijian, LI Ying
Journal of Computer Applications    2018, 38 (5): 1272-1277.   DOI: 10.11772/j.issn.1001-9081.2017112652
Abstract2588)      PDF (946KB)(1133)       Save
News text usually consists of tens to hundreds of sentences, so it contains a large number of characters and much information irrelevant to the topic, which hurts classification performance. In view of this problem, a long text classification method combined with an attention mechanism was proposed. Firstly, each sentence was represented by a paragraph vector, and a neural attention model over paragraph vectors and text categories was constructed to compute sentence-level attention. Then, sentences were filtered according to their contribution to the category, measured by the mean square error of the sentence attention vector. Finally, a classifier based on Convolutional Neural Network (CNN) was constructed: the filtered text and the attention matrix were taken as network inputs, max pooling was used for feature filtering, and random dropout was used to reduce over-fitting. Experiments were conducted on the Chinese news text classification dataset of the Natural Language Processing and Chinese Computing (NLP&CC) 2014 shared task. The proposed method achieved 80.39% accuracy on the filtered text, whose length was 82.74% of the original, a considerable improvement of 2.1% over the unfiltered text. The experimental results show that, combined with the attention mechanism, the proposed method can improve the accuracy of long text classification while performing sentence-level information filtering.
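The filtering rule can be read in more than one way from the abstract; the sketch below takes one plausible reading, keeping sentences whose attention weight reaches the mean weight of the document. The function name and the `attn` scores are assumptions, and the trained attention model that would produce the scores is not shown.

```python
def filter_by_attention(sentences, attn):
    """Drop low-contribution sentences: keep a sentence when its attention
    weight is at least the document's mean attention weight (one plausible
    reading of the paper's MSE-based contribution rule)."""
    mean = sum(attn) / len(attn)
    return [s for s, a in zip(sentences, attn) if a >= mean]
```

The surviving sentences would then be fed, together with their attention values, to the CNN classifier described in the abstract.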
Reference | Related Articles | Metrics
Slices reconstruction method for single image dedusting
WANG Yuanyu, ZHANG Yifan, WANG Yunfei
Journal of Computer Applications    2018, 38 (4): 1117-1120.   DOI: 10.11772/j.issn.1001-9081.2017092388
Abstract330)      PDF (824KB)(308)       Save
To address image degradation in non-uniform dust environments with multiple scattered lights, a slice-reconstruction method for single-image dedusting was proposed. Firstly, slices along the depth orientation were produced based on the McCartney model in a dust environment. Secondly, a joint dust detection method was used to detect dust patches in the slices: non-dust areas were retained, while dust zones were marked as candidate detection areas for the next slice image. Then, an image was reconstructed by combining the non-dust areas of each slice with the dust zone of the last slice. Finally, the restored image was obtained by applying a fast guided filter to the reconstructed area. The experimental results show that the proposed restoration method can quickly and effectively remove dust from the image, and it lays a foundation for computer-vision-based object detection and recognition in dust environments.
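The depth slices are built on the McCartney scattering model; a minimal sketch of that model is given below. The symbols (scene radiance J, airlight A, extinction coefficient beta, depth d) follow the standard formulation, and the function names are illustrative; the slicing and joint dust detection pipeline itself is not reproduced.

```python
import math

def transmission(beta, depth):
    """Beer-Lambert transmission t = exp(-beta * d): the fraction of scene
    radiance surviving the dust medium at depth d."""
    return math.exp(-beta * depth)

def degrade(radiance, airlight, beta, depth):
    """Observed intensity under the McCartney model: I = J*t + A*(1 - t).
    Near depth 0 the scene dominates; at large depth the airlight dominates."""
    t = transmission(beta, depth)
    return radiance * t + airlight * (1 - t)
```

Slicing the scene along depth amounts to evaluating this model over successive depth intervals, which is what lets near, dust-free regions be separated from far, dust-covered ones.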
Reference | Related Articles | Metrics
Multi-keyword ciphertext search method in cloud storage environment
YANG Hongyu, WANG Yue
Journal of Computer Applications    2018, 38 (2): 343-347.   DOI: 10.11772/j.issn.1001-9081.2017071869
Abstract626)      PDF (963KB)(380)       Save
Aiming at the low efficiency and lack of adaptive ability of existing multi-keyword ciphertext search methods in cloud storage environments, a Multi-keyword Ranked Search over Encrypted cloud data based on Improved Quality Hierarchical Clustering (MRSE-IQHC) method was proposed. Firstly, document vectors were constructed by the Term Frequency-Inverse Document Frequency (TF-IDF) method and the Vector Space Model (VSM). Secondly, an Improved Quality Hierarchical Clustering (IQHC) algorithm was proposed to cluster the document vectors, and a document index and a cluster index were constructed. Thirdly, the K-Nearest Neighbor (KNN) query algorithm was used to encrypt the indexes. Finally, user-defined keyword weights were used to construct search requests and retrieve the top-k relevant documents in ciphertext form. The experimental results show that, compared with the Multi-keyword Ranked Search over Encrypted cloud data (MRSE) method and the Multi-keyword Ranked Search over Encrypted data based on Hierarchical Clustering Index (MRSE-HCI) method, the search time is shortened by 44.3% and 34.2%, 32.4% and 13.2%, and 36.9% and 19.4% under the same numbers of searched documents, retrieved documents, and search keywords respectively, and the accuracy is increased by 10.8% and 8.6%. The proposed MRSE-IQHC method achieves high search efficiency and accuracy for multi-keyword ciphertext search in cloud storage environments.
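The plaintext counterpart of the scheme's first step, building TF-IDF document vectors and ranking by similarity, can be sketched as follows. This shows only the vector-space ranking; the clustering, secure-KNN encryption, and trapdoor construction are not reproduced, and the whitespace tokenization is a simplification.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors over a shared vocabulary from a list of
    whitespace-tokenized documents."""
    n = len(docs)
    df = Counter(w for d in docs for w in set(d.split()))
    vocab = sorted(df)
    idf = {w: math.log(n / df[w]) + 1 for w in vocab}
    vecs = []
    for d in docs:
        words = d.split()
        tf = Counter(words)
        vecs.append([tf[w] / len(words) * idf[w] for w in vocab])
    return vocab, vecs

def top_k(query_vec, vecs, k):
    """Rank documents by cosine similarity to the query and return the
    indices of the top-k matches."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a)) or 1.0
        nb = math.sqrt(sum(x * x for x in b)) or 1.0
        return dot / (na * nb)
    sims = [(cos(query_vec, v), i) for i, v in enumerate(vecs)]
    return [i for _, i in sorted(sims, reverse=True)[:k]]
```

In the actual scheme both the index vectors and the query vector would be encrypted with the secure KNN technique so that the server can compute these inner products without learning the plaintext.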
Reference | Related Articles | Metrics
High fidelity haze removal method for remote sensing images based on estimation of haze thickness map
WANG Yueyun, HUANG Wei, WANG Rui
Journal of Computer Applications    2018, 38 (12): 3596-3600.   DOI: 10.11772/j.issn.1001-9081.2018051149
Abstract356)      PDF (969KB)(303)       Save
Haze removal from remote sensing images can easily distort ground objects. To solve this problem, an improved haze removal algorithm was proposed on the basis of the traditional additive haze pollution model: a high-fidelity haze removal method based on estimation of the Haze Thickness Map (HTM). Firstly, the HTM was obtained with the traditional additive haze removal algorithm, and the mean value over the cloudless areas was subtracted from the whole HTM so that the haze thickness of cloudless areas is close to zero. Then, the haze thickness over blue ground objects in the degraded image was estimated separately. Finally, the cloudless image was obtained by subtracting the final optimized haze thickness map of each band from the degraded image. Experiments were carried out on multiple optical remote sensing images of different resolutions. The experimental results show that the proposed method effectively solves the serious distortion of blue ground objects, improves the haze removal effect on degraded images, and improves the data fidelity of cloudless areas.
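The anchor-and-subtract step of the additive model can be sketched compactly. This is a simplification under stated assumptions: the per-band scaling of the paper is folded into a single subtraction, and the clear-area mask is taken as given rather than detected.

```python
import numpy as np

def remove_haze(image, htm, clear_mask):
    """Additive-model haze removal sketch: shift the haze thickness map so
    clear areas sit at ~0 thickness, then subtract it from every band.
    image: (H, W, bands) array; htm: (H, W); clear_mask: boolean (H, W)."""
    htm = htm - htm[clear_mask].mean()   # anchor clear regions at zero thickness
    htm = np.clip(htm, 0, None)          # never add haze back
    out = image - htm[..., None]         # subtract thickness from each band
    return np.clip(out, 0, None)
```

Anchoring on the clear-area mean is what keeps haze-free pixels untouched, which is the fidelity property the method emphasizes.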
Reference | Related Articles | Metrics
Learning method of indoor scene semantic annotation based on texture information
ZHANG Yuanyuan, HUANG Yijun, WANG Yuefei
Journal of Computer Applications    2018, 38 (12): 3409-3413.   DOI: 10.11772/j.issn.1001-9081.2018040892
Abstract344)      PDF (880KB)(369)       Save
Detection, tracking and information editing of key objects in indoor scene video are mainly done manually, which is inefficient and imprecise. To solve these problems, a learning method for indoor scene semantic annotation based on texture information was proposed. Firstly, optical flow was used to obtain the motion information between video frames, and key-frame annotations together with inter-frame motion were used to initialize the annotations of non-key frames. Then, an energy equation was constructed from the image texture information constraints of the non-key frames and their initialized annotations. Finally, the graph-cut method was used to optimize the energy equation, whose solution is the non-key-frame semantic annotation. Experimental results on annotation accuracy and visual effects show that the proposed method outperforms the motion estimation method and the model-based learning method. The proposed method can serve as a reference for low-latency decision-making systems such as service robots, smart homes and emergency response.
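The initialization step, warping key-frame labels along per-pixel optical flow to seed the non-key frames, can be sketched as a simple forward warp. The nearest-neighbor rounding and the function name are assumptions; the texture-constrained energy and the graph-cut refinement are not reproduced.

```python
import numpy as np

def propagate_labels(labels, flow):
    """Initialize a non-key frame's annotation by warping key-frame labels
    along per-pixel optical flow (nearest-neighbor forward warp).
    labels: (H, W) int array; flow: (H, W, 2) array of (dy, dx)."""
    h, w = labels.shape
    out = np.zeros_like(labels)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            ny, nx = int(round(y + dy)), int(round(x + dx))
            if 0 <= ny < h and 0 <= nx < w:   # drop pixels that flow off-frame
                out[ny, nx] = labels[y, x]
    return out
```

Holes and collisions left by the warp are exactly what the subsequent texture-based energy minimization is there to clean up.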
Reference | Related Articles | Metrics
Transfer learning based hierarchical attention neural network for sentiment analysis
QU Zhaowei, WANG Yuan, WANG Xiaoru
Journal of Computer Applications    2018, 38 (11): 3053-3056.   DOI: 10.11772/j.issn.1001-9081.2018041363
Abstract984)      PDF (759KB)(838)       Save
The purpose of document-level sentiment analysis is to predict the sentiment that users express in a document. Traditional neural-network-based methods rely on unsupervised word vectors, which cannot accurately capture contextual relationships or understand the context. Moreover, the Recurrent Neural Networks (RNN) generally used for sentiment analysis have complex structures and numerous parameters. To address these issues, a Transfer Learning based Hierarchical Attention Neural Network (TLHANN) was proposed. Firstly, an encoder was trained on a machine translation task to understand context and generate hidden vectors. Then, the encoder was transferred to the sentiment analysis task by concatenating its hidden vectors with the corresponding unsupervised word vectors; this distributed representation captures contextual relationships better. Finally, a two-level hierarchical network was applied to the sentiment analysis task, with a simplified RNN unit, the Minimal Gate Unit (MGU), at each level, leading to fewer parameters; the attention mechanism was used to extract important information. The experimental results show that the accuracy of the proposed algorithm is higher by an average of 8.7% and 23.4% than that of the traditional neural network algorithm and Support Vector Machine (SVM) respectively.
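The MGU is a published single-gate GRU variant; one recurrent step can be sketched in NumPy as follows. The weight shapes and variable names are a standard presentation of the unit, not taken from this paper's implementation.

```python
import numpy as np

def mgu_step(x, h, Wf, bf, Wh, bh):
    """One Minimal Gate Unit step: a single forget gate f replaces the GRU's
    separate update and reset gates, roughly halving the gate parameters.
    Shapes: x (input,), h (hidden,), Wf and Wh (hidden, hidden + input)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    f = sigmoid(Wf @ np.concatenate([h, x]) + bf)             # forget gate
    h_tilde = np.tanh(Wh @ np.concatenate([f * h, x]) + bh)   # candidate state
    return (1 - f) * h + f * h_tilde                          # gated update
```

Because the update interpolates between the old state and a tanh candidate, the hidden state stays bounded in (-1, 1), which keeps the two-level hierarchy stable with few parameters.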
Reference | Related Articles | Metrics
Adaptive backstepping sliding mode control for robotic manipulator with the improved nonlinear disturbance observer
ZOU Sifan, WU Guoqing, MAO Jingfeng, ZHU Weinan, WANG Yurong, WANG Jian
Journal of Computer Applications    2018, 38 (10): 2827-2832.   DOI: 10.11772/j.issn.1001-9081.2018030525
Abstract757)      PDF (799KB)(419)       Save
To address the control-input chattering of traditional sliding mode control, its need for an acceleration term, and the limited range of models to which traditional disturbance observers apply in manipulator joint position tracking, an adaptive backstepping sliding mode control algorithm for manipulators with an improved nonlinear disturbance observer was proposed. Firstly, an improved nonlinear disturbance observer was designed for on-line disturbance estimation; the disturbance estimate was added to the sliding mode control law to compensate for observable disturbances, and appropriate design parameters were selected to make the observation error converge exponentially. An adaptive control law was then used to estimate the unobservable disturbance, further improving the tracking performance of the control system. Finally, a Lyapunov function was used to verify the asymptotic stability of the closed-loop system, and the approach was applied to manipulator joint position tracking. The experimental results show that, compared with the traditional sliding mode algorithm, the improved algorithm not only accelerates the system's response but also effectively suppresses chattering, avoids measuring acceleration terms, and broadens the range of applicable models.
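A generic sliding mode control law with disturbance compensation and a chattering-reducing boundary layer can be sketched as below. This is not the paper's backstepping design: the gains, the saturation boundary layer, and the scalar form are illustrative assumptions.

```python
def smc_step(e, de, dist_hat, k=5.0, lam=2.0, eps=0.5):
    """One sliding-mode control evaluation for tracking error e and its
    derivative de: s = de + lam*e defines the sliding surface, sat(s/eps)
    replaces sign(s) inside a boundary layer to reduce chattering, and the
    observer's disturbance estimate dist_hat is fed forward."""
    s = de + lam * e                        # sliding surface
    sat = max(-1.0, min(1.0, s / eps))      # saturated switching term
    return -lam * de - k * sat - dist_hat   # control input
```

Outside the boundary layer the law behaves like classical sign-based sliding mode; inside it the control varies continuously, which is the standard remedy for chattering that the abstract's improved design also pursues.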
Reference | Related Articles | Metrics
High efficient construction of location fingerprint database based on matrix completion improved by backtracking search optimization
LI Lina, LI Wenhao, YOU Hongxiang, WANG Yue
Journal of Computer Applications    2017, 37 (7): 1893-1899.   DOI: 10.11772/j.issn.1001-9081.2017.07.1893
Abstract470)      PDF (1047KB)(444)       Save
To address the problems of the off-line construction of location fingerprint databases for Received Signal Strength Indication (RSSI) fingerprint positioning, namely the heavy workload of collecting fingerprints over the whole area, the low construction efficiency, and the limited precision of interpolation, an efficient off-line construction method based on the Singular Value Thresholding (SVT) Matrix Completion (MC) algorithm improved by the Backtracking Search optimization Algorithm (BSA) was proposed. Firstly, a low-rank matrix completion model was established from the fingerprint data collected at some reference nodes. Then the model was solved by the SVT-based low-rank MC algorithm. Finally, the complete location fingerprint database of the area was reconstructed. At the same time, BSA was introduced into the optimization process of the MC algorithm, with the minimum nuclear norm as the fitness function, to address the fuzzy optimal solution and poor smoothness of the traditional MC theory and further improve the accuracy of the solution. The experimental results show that the average error between the fingerprint database constructed by the proposed method and the actually collected one is only 2.7054 dB, and the average positioning error is only 0.0863 m, while nearly 50% of the off-line collection workload is saved. These results show that the proposed off-line construction method effectively reduces the workload of the off-line collection stage while ensuring accuracy, significantly improves the construction efficiency of the location fingerprint database, and improves the practicability of fingerprint positioning to a certain extent.
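The SVT core of the method can be sketched in a few lines of NumPy. The fixed threshold and step size here are plain defaults; in the paper they are the quantities the BSA would tune, and the RSSI matrix below is synthetic.

```python
import numpy as np

def svt_complete(M, mask, tau=5.0, step=1.2, iters=200):
    """Singular Value Thresholding for matrix completion: alternately
    soft-threshold the singular values (low-rank shrinkage) and correct
    the observed entries back toward their measured values.
    M: partially observed matrix; mask: 1 where observed, 0 where missing."""
    Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0)) @ Vt   # shrink singular values
        Y = Y + step * mask * (M - X)                  # enforce observed entries
    return X
```

Because RSSI fingerprint matrices are approximately low-rank, the unobserved entries are filled in consistently with the observed ones, which is what lets roughly half of the collection work be skipped.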
Reference | Related Articles | Metrics